Section 230


California investigates Grok over AI deepfakes

BBC News

California's top prosecutor has launched an investigation into the spread of sexualised AI deepfakes generated by Elon Musk's AI model Grok. Attorney General Rob Bonta said in a statement announcing the probe: "The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking." California's inquiry comes as British Prime Minister Sir Keir Starmer warns of possible action against X. In Wednesday's statement, Bonta said: "This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet." The Democratic prosecutor urged xAI to take immediate action.


9th Circuit clears Grindr, dating app for gay men, in child sex trafficking case

Los Angeles Times

Grindr, the dating app that caters to gay men, cannot be held responsible for the rape of a 15-year-old boy whom the company matched with sexual predators, the U.S. 9th Circuit Court of Appeals ruled this week; it is the latest teens-versus-tech spat in a fight over internet immunity that experts say could soon come before the U.S. Supreme Court. The appellate court's ruling upheld a 2023 decision by U.S. District Judge Otis D. Wright II of the Central District of California, who dismissed the suit, saying Grindr was shielded by broad immunity protections passed almost a decade before the plaintiff was born. In a series of events Wright called "alarming and tragic," a closeted Nova Scotia teen downloaded the LGBTQ hookup app in an attempt to meet other gay kids in his rural Canadian town. Instead, over the course of four days, he was assaulted by four adult men, including a man who picked him up after the teen sent him pictures from his high school cafeteria. LGBTQ social networking platform Grindr last year told its all-remote staff they had to return to the office or lose their jobs.


This New AI Search Engine Has a Gimmick: Humans Answering Questions

WIRED

When online search engines first appeared, they seemed miraculous. It is a truth near-universally acknowledged that search is in the dumps, corroded by spam and ads. Big players like Google insist that AI is the savior of search, despite many early attempts to integrate AI ending in disaster. Recently, I got an email promoting another new AI search engine, but this one has a notably quirky approach to answering questions. Called Pearl, it's coming out of beta this week.


Is the U.S. Legal System Ready for AI's Challenges to Human Values?

Cheong, Inyoung, Caliskan, Aylin, Kohno, Tadayoshi

arXiv.org Artificial Intelligence

Our interdisciplinary study investigates how effectively U.S. laws confront the challenges posed by Generative AI to human values. Through an analysis of diverse hypothetical scenarios crafted during an expert workshop, we have identified notable gaps and uncertainties within the existing legal framework regarding the protection of fundamental values, such as privacy, autonomy, dignity, diversity, equity, and physical/mental well-being. Constitutional and civil rights, it appears, may not provide sufficient protection against AI-generated discriminatory outputs. Furthermore, even if we exclude the liability shield provided by Section 230, proving causation for defamation and product liability claims is a challenging endeavor due to the intricate and opaque nature of AI systems. To address the unique and unforeseeable threats posed by Generative AI, we advocate for legal frameworks that evolve to recognize new threats and provide proactive, auditable guidelines to industry stakeholders. Addressing these issues requires deep interdisciplinary collaborations to identify harms, values, and mitigation strategies.


ChatGPT is easily exploited for political messaging despite OpenAI's policies

Engadget

In March, OpenAI sought to head off concerns that its immensely popular, albeit hallucination-prone, ChatGPT generative AI could be used to dangerously amplify political disinformation campaigns, updating the company's Usage Policy to expressly prohibit such behavior. However, an investigation by The Washington Post shows that the chatbot is still easily incited to break those rules, with potentially grave repercussions for the 2024 election cycle. OpenAI's user policies specifically ban its use for political campaigning, save for use by "grassroots advocacy campaigns" organizations. This includes generating campaign materials in high volumes, targeting those materials at specific demographics, building campaign chatbots to disseminate information, and engaging in political advocacy or lobbying. OpenAI told Semafor in April that it was "developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying."


Where's the Liability in Harmful AI Speech?

Henderson, Peter, Hashimoto, Tatsunori, Lemley, Mark

arXiv.org Artificial Intelligence

Generative AI, in particular text-based "foundation models" (large models trained on a huge variety of information including the internet), can generate speech that could be problematic under a wide range of liability regimes. Machine learning practitioners regularly "red team" models to identify and mitigate such problematic speech: from "hallucinations" falsely accusing people of serious misconduct to recipes for constructing an atomic bomb. A key question is whether these red-teamed behaviors actually present any liability risk for model creators and deployers under U.S. law, incentivizing investments in safety mechanisms. We examine three liability regimes, tying them to common examples of red-teamed model behaviors: defamation, speech integral to criminal conduct, and wrongful death. We find that any Section 230 immunity analysis or downstream liability analysis is intimately wrapped up in the technical details of algorithm design. And there are many roadblocks to truly finding models (and their associated parties) liable for generated speech. We argue that AI should not be categorically immune from liability in these scenarios and that as courts grapple with the already fine-grained complexities of platform algorithms, the technical details of generative AI loom above with thornier questions. Courts and policymakers should think carefully about what technical design incentives they create as they evaluate these issues.


Open-Source Large Language Models Outperform Crowd Workers and Approach ChatGPT in Text-Annotation Tasks

Alizadeh, Meysam, Kubli, Maël, Samei, Zeynab, Dehghani, Shirin, Bermeo, Juan Diego, Korobeynikova, Maria, Gilardi, Fabrizio

arXiv.org Artificial Intelligence

For instance, studies demonstrate that ChatGPT exceeds the performance of crowd-workers in tasks encompassing relevance, stance, sentiment, topic identification, and frame detection (Gilardi, Alizadeh and Kubli, 2023), that it outperforms trained annotators in detecting the political party affiliations of Twitter users (Törnberg, 2023), and that it achieves accuracy scores over 0.6 for tasks such as stance, sentiment, hate speech detection, and bot identification (Zhu et al., 2023). Notably, ChatGPT also demonstrates the ability to correctly classify more than 70% of news as either true or false (Hoes, Altay and Bermeo, 2023), which suggests that LLMs might potentially be used to assist content moderation processes. While the performance of LLMs for text annotation is promising, there are several aspects that remain unclear and require further research. Among these is the impact of different approaches such as zero-shot versus few-shot learning and settings such as varying temperature parameters. Zero-shot learning allows models to predict for unseen tasks, while few-shot learning uses a small number of examples to generalize to new tasks. The conditions under which one approach outperforms the other are not fully understood yet.
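The zero-shot versus few-shot distinction described above comes down to whether labeled demonstrations are included in the annotation prompt. A minimal sketch, assuming a hypothetical stance-annotation task (the labels, example tweets, and `build_prompt` helper are illustrative, not from the cited studies):

```python
def build_prompt(text, labels, examples=None):
    """Construct an annotation prompt for an LLM.

    Zero-shot: only the instruction and the item to label.
    Few-shot: a handful of labeled demonstrations are prepended,
    which the model generalizes from.
    """
    lines = [f"Classify the stance of the tweet as one of: {', '.join(labels)}."]
    if examples:  # few-shot branch: add labeled demonstrations
        for ex_text, ex_label in examples:
            lines.append(f"Tweet: {ex_text}\nStance: {ex_label}")
    lines.append(f"Tweet: {text}\nStance:")  # item to be annotated
    return "\n\n".join(lines)

labels = ["favor", "against", "neutral"]
tweet = "Carbon taxes are the only realistic path forward."

# Zero-shot prompt: no demonstrations.
zero_shot = build_prompt(tweet, labels)

# Few-shot prompt: two illustrative labeled examples.
few_shot = build_prompt(
    tweet, labels,
    examples=[("Ban coal now.", "favor"),
              ("Climate rules kill jobs.", "against")],
)
```

In practice the resulting prompt would be sent to an LLM; for annotation tasks a low sampling temperature (near 0) is typically chosen so that repeated runs yield consistent labels, which is the temperature setting the authors flag as needing systematic study.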


Senate bill would hold AI companies liable for harmful content

Engadget

Politicians think they have a way to hold companies accountable for troublesome generative AI: take away their legal protection. Senators Richard Blumenthal and Josh Hawley have introduced the No Section 230 Immunity for AI Act, which, as the name suggests, would prevent OpenAI, Google and similar firms from using the Communications Decency Act's Section 230 to waive liability for harmful content and avoid lawsuits. If someone created a deepfake image or sound bite to ruin a reputation, for instance, the tool developer could be held responsible alongside the person who used it. Hawley characterizes the bill as forcing AI creators to "take responsibility for business decisions" as they're developing products. He also casts the legislation as a "first step" toward creating rules for AI and establishing safety measures.


How to rein in the AI threat? Let the lawyers loose

FOX News

Log Off Movement CEO Emma Lembke and teacher Matt Miles discuss the impact of artificial intelligence on kids on 'The Story.' Fifty-five percent of Americans are worried by the threat of AI to the future of humanity, according to a recent Monmouth University poll. More than 1,000 AI experts and funders, including Elon Musk and Steve Wozniak, signed a letter calling for a six-month pause in training new AI models. In turn, Time published an article calling for a permanent global ban. However, the problem with these proposals is that they require coordination among numerous stakeholders from a wide variety of companies and government figures. Let me share a more modest proposal that's much more in line with our existing methods of reining in potentially threatening developments: legal liability.


Supreme Court ruling in YouTube case could have implications for ChatGPT

FOX News

Fox News Flash top headlines are here. Check out what's clicking on Foxnews.com. When the U.S. Supreme Court decides in the coming months whether to weaken a powerful shield protecting internet companies, the ruling also could have implications for rapidly developing technologies like artificial intelligence chatbot ChatGPT. The justices are due to rule by the end of June whether Alphabet Inc's YouTube can be sued over its video recommendations to users. That case tests whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations.